Section: New Results

Parallel and Distributed Verification

Distributed State Space Manipulation

Participants: Hubert Garavel, Wendelin Serwe.

For distributed verification, CADP provides the PBG format, which implements the theoretical concept of Partitioned LTS [34] and provides unified access to an LTS distributed over a set of remote machines.
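
The actual PBG format and its implementation in CADP are not shown here; the following minimal sketch (in Python, with names of our own invention rather than the real interface) merely illustrates the idea behind a partitioned LTS: states are assigned to workers by a static partition function, each worker stores the transitions of the states it owns, and a single successor function offers unified access regardless of which machine holds a given state.

```python
from collections import defaultdict

class PartitionedLTS:
    """Toy stand-in for an LTS whose transitions are spread over workers."""

    def __init__(self, num_workers):
        self.num_workers = num_workers
        # one transition fragment per worker: state -> [(label, successor)]
        self.fragments = [defaultdict(list) for _ in range(num_workers)]

    def owner(self, state):
        # static partition function: every worker computes the same owner
        # for a given state, so no global lookup table is needed
        return hash(state) % self.num_workers

    def add_transition(self, src, label, dst):
        self.fragments[self.owner(src)][src].append((label, dst))

    def successors(self, state):
        # unified access: callers need not know which fragment (i.e.,
        # which remote machine) holds the transitions of this state
        return self.fragments[self.owner(state)][state]

lts = PartitionedLTS(num_workers=4)
lts.add_transition("s0", "SEND", "s1")
print(lts.successors("s0"))  # [('SEND', 's1')]
```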

In 2017, many changes were made to simplify the code of the CAESAR_NETWORK_1 communication library, which is the backbone of the distributed verification tools of CADP, as well as the code of other tools such as BCG_MIN; most of these changes are not directly observable by end users. In addition to two bug fixes in CAESAR_NETWORK_1 and two others in the BES_SOLVE tool, the error messages displayed by the various tools and the statistical information produced by the “-stat” option of BES_SOLVE have been made more concise and more informative.

Debugging of Concurrent Systems

Participants: Gianluca Barbon, Gwen Salaün.

Model checking is an established technique for automatically verifying that a model satisfies a given temporal property. When the model violates the property, the model checker returns a counterexample: a sequence of actions leading to a state where the property is not satisfied. Understanding such a counterexample in order to debug the specification is a complicated task, for several reasons: (i) the counterexample may contain hundreds of actions, (ii) the debugging task is mostly carried out manually, and (iii) the counterexample does not explicitly highlight the source of the bug, which remains hidden in the model.
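
To make the notion concrete, here is a minimal sketch (ours, not CADP code) of how an explicit-state model checker can produce such a counterexample for a safety property: a breadth-first search over the reachable states returns the sequence of actions leading to the first violating state encountered.

```python
from collections import deque

def find_counterexample(initial, transitions, violates):
    """Breadth-first search for a reachable state violating a safety
    property; returns the action sequence leading to it, or None.
    transitions: state -> list of (action, successor) pairs;
    violates:    predicate returning True on property violations."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, trace = queue.popleft()
        if violates(state):
            return trace  # the counterexample: a sequence of actions
        for action, succ in transitions(state):
            if succ not in visited:
                visited.add(succ)
                queue.append((succ, trace + [action]))
    return None  # the property holds on every reachable state
```

Because the exploration is breadth-first, the returned trace is one of the shortest counterexamples in the model, which is the usual choice in explicit-state model checkers.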

We proposed an approach that improves the usability of model checking by simplifying the comprehension of counterexamples. Our solution aims to keep in counterexamples only those actions that are relevant for debugging purposes. To do so, we first extract all counterexamples from the model. Second, we define an analysis algorithm that identifies the actions making the model skip from incorrect to correct behaviours; these actions are the relevant ones from a debugging perspective. Our approach is fully automated by a tool that we implemented and applied, for evaluation purposes, to real-world case studies from various application areas. This work led to a publication in an international conference [11].
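
The published algorithm compares the full model with an LTS containing all counterexamples; the sketch below (with hypothetical names, not the actual tool interface) only captures one reading of the underlying intuition on explicit state sets: first determine the states from which the violation has become inevitable, then keep the actions of a counterexample that cross the frontier between the region where correct behaviour is still reachable and the inevitably incorrect one, since these choice points are the ones worth showing to the user.

```python
def doomed_states(states, transitions, violates):
    """States from which every maximal path violates the property,
    computed as a fixed point over an explicit set of states.
    transitions: state -> list of (action, successor) pairs."""
    doomed = {s for s in states if violates(s)}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in doomed:
                continue
            succs = transitions(s)
            if succs and all(dst in doomed for _, dst in succs):
                doomed.add(s)
                changed = True
    return doomed

def relevant_actions(trace_states, trace_actions, doomed):
    """Keep only the actions of a counterexample that cross the
    frontier from states where correct behaviour is still reachable
    into states where the violation has become inevitable."""
    steps = zip(trace_states, trace_actions, trace_states[1:])
    return [act for src, act, dst in steps
            if src not in doomed and dst in doomed]
```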

In 2017, we focused on extending this approach in three directions: (a) we introduced new notions to identify further types of relevant actions; (b) we developed a set of heuristics to extract these actions from counterexamples; and (c) we proposed an alternative approach that targets a broader range of properties (namely, liveness properties). These extensions have been integrated into our tool, and a paper was submitted to an international journal.